What's the optimal way to sort your bookshelf? Sorting theory offers some guidance:
- Don't alphabetize. The time spent putting books into alphabetical order outweighs the seconds saved when retrieving a title, especially if you're more likely to browse than to hunt for a specific book.
- Don't group by category. In the age of search, sorting into genres, topics, or "like with like" is unnecessary. It costs more time than it saves.
- Do optimize for browsing. Put your favorite titles at eye level. Less accessed ones up high or down low.
- Do make it a "cache." Put the most recently read or acquired books in the most visible spot. They're most likely to be accessed again soon. Rotate books in and out.
The same principles apply to organizing your office, kitchen, or life.
Section: 1, Chapter: 3
Computer science has categorized many sorting algorithms and identified the "periodic table" of how they relate. The key attributes are:
- Runtime: How long the algorithm takes as the input grows, expressed in Big O notation. Bubble sort is O(n^2), merge sort is O(n log n). Average-case and worst-case runtimes are analyzed separately: quicksort averages O(n log n) but degrades to O(n^2) in the worst case.
- Stability: A "stable" sort keeps items with the same key in the same relative order. An unstable sort might scramble them.
- Memory: How much RAM the algorithm uses. Merge sort is not in-place, so it requires O(n) extra space. Heapsort is in-place, requiring only O(1) extra space.
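To make "stability" concrete, here is a quick Python illustration. Python's built-in sort (Timsort) is guaranteed stable, so records that tie on the sort key keep their original relative order:

```python
# Python's built-in sort is stable: records that compare equal
# on the sort key keep their original relative order.
records = [("carol", 2), ("alice", 1), ("bob", 2), ("dave", 1)]

by_score = sorted(records, key=lambda r: r[1])
# Ties keep their input order: alice before dave, carol before bob.
print(by_score)  # [('alice', 1), ('dave', 1), ('carol', 2), ('bob', 2)]
```

With an unstable sort, the relative order of ("carol", 2) and ("bob", 2) could come out either way.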
This "periodic table" helps programmers pick the right algorithm for a given job. For example:
- For general sorting, merge sort and quicksort are both fast O(n log n) choices, with different trade-offs: merge sort is stable but not in-place, while quicksort is typically in-place but not stable.
- For mostly sorted data, insertion sort is simple, stable, in-place, but O(n^2) worst case.
- For huge datasets that don't fit in RAM, external merge sort is the standard choice: it sorts memory-sized chunks, then merges them using cheap sequential disk reads.
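The "mostly sorted data" case above is where insertion sort shines. A minimal sketch (the O(n^2) worst case comes from the inner shifting loop, which barely runs when the input is nearly in order):

```python
def insertion_sort(items):
    """In-place, stable insertion sort. O(n^2) in the worst case,
    but close to O(n) when the input is already mostly sorted."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements right until current's slot is found.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items

nearly_sorted = [1, 2, 4, 3, 5, 6, 8, 7, 9]
print(insertion_sort(nearly_sorted))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```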
Section: 1, Chapter: 3
At the heart of computer networking are three key principles that also apply to human communication:
- Packet switching: Information should be broken into small, discrete packets rather than sent in one big lump. In conversation, this means chunking ideas into sentences and pausing for reactions, rather than monologuing.
- Acknowledgments: Recipients should signal when they've received each packet, so the sender knows everything arrived. In human terms, backchannels like "uh huh" and "I see" are crucial for acknowledgement.
- Retransmission: If a packet goes unacknowledged, the sender should retry a few times before giving up and moving on. Socially, if someone doesn't respond to a question or comment, it's worth repeating or rephrasing once or twice.
Packet switching prevents overload, acknowledgments catch what's dropped, and retransmission fixes the remaining holes.
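The three principles fit together in a few lines of code. This is a toy sketch, not real networking: `lossy_send` is a made-up stand-in for an unreliable channel, and returning True plays the role of an acknowledgment:

```python
import random

def send_with_retransmit(send, packet, max_retries=3):
    """Retry an unacknowledged packet a few times before giving up."""
    for _ in range(max_retries):
        if send(packet):
            return True   # acknowledged
    return False          # give up and move on

def lossy_send(packet, loss_rate=0.3):
    """Toy stand-in for a lossy channel: True means an ack came back."""
    return random.random() > loss_rate

# Packet switching: chunk the message instead of sending one big lump.
message = "Want to grab lunch at noon?"
packets = message.split()
delivered = [send_with_retransmit(lossy_send, p) for p in packets]
print(f"{sum(delivered)}/{len(packets)} packets acknowledged")
```

Even with 30% loss per attempt, three tries drive the chance of a packet going undelivered down to about 3%, which is why a couple of retransmissions fix most of the holes.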
Section: 1, Chapter: 10
When deciding what to keep and what to discard, whether it's for your closet, your bookshelf, or your computer's memory, consider two factors:
- Frequency: How often is this item used or accessed? Things used most often should be kept close at hand. This is why your computer's RAM is faster than its hard disk.
- Recency: When was this item last used? Items used more recently are more likely to be used again soon. So the most recently used items should also be kept easily accessible.
Many caching algorithms, like Least Recently Used (LRU), primarily consider recency. But the optimal approach, as used by the human brain, balances both frequency and recency.
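An LRU cache is short enough to sketch in full. A minimal Python version using `collections.OrderedDict`, with the bookshelf analogy as the example data:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal Least Recently Used cache: when full, evict the item
    that has gone the longest without being accessed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

shelf = LRUCache(capacity=2)
shelf.put("cookbook", "kitchen counter")
shelf.put("novel", "nightstand")
shelf.get("cookbook")       # touching it makes the cookbook most recent
shelf.put("atlas", "desk")  # evicts the novel, the least recently used
print(shelf.get("novel"))   # None
```

Note that pure LRU only tracks recency; a frequency-aware scheme would also count how often the cookbook gets pulled off the shelf.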
Section: 1, Chapter: 4
Chapter 7 explores how imitation learning - having machines learn by observing and mimicking human behavior - is both a distinctively human capability and a promising approach to building flexible AI systems.
- Humans are unique in our ability and proclivity to imitate, which is a foundation of our intelligence. Even infants just a few days old can mimic facial expressions.
- Imitation is powerful because it allows learning from a small number of expert demonstrations rather than extensive trial-and-error. It also enables learning unspoken goals and intangible skills.
- Techniques like inverse reinforcement learning infer reward functions from examples of expert behavior, enabling machines to adopt the goals and values implicit in the demonstrated actions.
- Imperfect imitation that captures the demonstrator's underlying intent can actually produce behavior that surpasses that of the teacher. This "value alignment" may be essential for building beneficial AI systems.
- But imitation also has pitfalls - it tends to break down when the imitator has different capabilities than the demonstrator, or encounters novel situations. So imitation is powerful, but no panacea.
The big picture is that imitation learning is a distinctively human form of intelligence that is also a promising path to more human-compatible AI systems. But it must be thoughtfully combined with other forms of learning and adaptation to achieve robust real-world performance.
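The simplest form of imitation learning, behavioral cloning, just fits a policy to expert state-action pairs. A toy sketch (no ML library, majority vote per state; the states and actions are invented for illustration):

```python
from collections import Counter, defaultdict

def behavioral_clone(demonstrations):
    """Learn a policy by mimicry: in each observed state,
    do whatever the expert did most often there."""
    seen = defaultdict(Counter)
    for state, action in demonstrations:
        seen[state][action] += 1
    return {s: actions.most_common(1)[0][0] for s, actions in seen.items()}

# Expert demonstrations: (state, action) pairs from a toy driving task.
demos = [
    ("clear_road", "accelerate"),
    ("clear_road", "accelerate"),
    ("obstacle_ahead", "brake"),
    ("obstacle_ahead", "brake"),
    ("obstacle_ahead", "swerve"),
]
policy = behavioral_clone(demos)
print(policy)  # {'clear_road': 'accelerate', 'obstacle_ahead': 'brake'}
```

The pitfall from the summary is visible here: a state the expert never visited has no entry in the policy at all, which is why pure imitation breaks down in novel situations.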
Section: 3, Chapter: 7